Multi-modal sensory data plays an important role in many computer vision and robotics tasks. One popular multi-modal pair is cameras and laser scanners. To overlay and jointly use the data from both modalities, it is necessary to calibrate the sensors, i.e., to obtain the spatial relation between them. Computing such a calibration is challenging, as the two sensors provide quite different data: cameras yield color or brightness information, whereas laser scanners yield 3-D points. However, several laser scanners additionally provide reflectances, which turn out to make calibration to a camera feasible. To this end, we first estimate a rough alignment of the coordinate systems of both modalities. Then, we use the laser scanner reflectances to compute a virtual image of the scene. Stereo calibration between the virtual image and the camera image is then used to compute a refined, high-accuracy calibration. It is encouraging that the accuracies in our experiments are comparable to those of camera-camera stereo setups and outperform another target-based calibration approach. This shows that the proposed algorithm reliably integrates the point cloud with the intensity image. As an example application, we use the calibration results to obtain ground-truth distance images for range cameras. Furthermore, we utilize this data to investigate the accuracy of the Microsoft Kinect V2 time-of-flight camera and the Intel RealSense R200 structured light camera.
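To make the virtual-image step more concrete, the following is a minimal sketch, assuming a pinhole virtual camera and OpenCV; the function name, the z-buffer splatting, and all parameters are illustrative assumptions, not the paper's actual implementation. It renders the reflectance-attributed point cloud into an intensity-like image using the rough extrinsic guess.

```python
import numpy as np
import cv2


def render_virtual_image(points_xyz, reflectance, K, rvec, tvec, size):
    """Illustrative sketch (not the paper's implementation): project laser
    points with reflectance values into a virtual pinhole camera defined by
    the rough extrinsics (rvec, tvec) and intrinsics K, and splat them into
    an 8-bit intensity image."""
    w, h = size
    img = np.zeros((h, w), dtype=np.float32)
    zbuf = np.full((h, w), np.inf, dtype=np.float32)

    # Transform points into the virtual camera frame to obtain per-point depth.
    R, _ = cv2.Rodrigues(rvec)
    cam_pts = points_xyz @ R.T + tvec.reshape(1, 3)

    # Project all points into the virtual image plane (no distortion assumed).
    uv, _ = cv2.projectPoints(points_xyz.astype(np.float64), rvec, tvec, K, None)
    uv = uv.reshape(-1, 2)

    for (u, v), z, r in zip(uv, cam_pts[:, 2], reflectance):
        if z <= 0:
            continue  # point lies behind the virtual camera
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h and z < zbuf[vi, ui]:
            zbuf[vi, ui] = z   # simple z-buffer: keep only the closest point
            img[vi, ui] = r    # reflectance acts as virtual "brightness"

    # Normalize reflectances to an 8-bit image suitable for feature detection.
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```

Given such a virtual image, the refinement described above can be carried out with standard stereo calibration machinery, e.g., detecting corresponding target or feature points in the virtual image and the camera image and estimating the relative pose with a routine such as cv2.stereoCalibrate; the exact correspondence and optimization scheme of the paper is not reproduced here.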